    The WHY in Business Processes: Discovery of Causal Execution Dependencies

    A crucial element in predicting the outcomes of process interventions and making informed decisions about a process is unraveling the genuine relationships between the executions of its activities. Contemporary process discovery algorithms rely on time precedence as their main source of model derivation, a reliance that can be deceiving from a causal perspective. This calls for new techniques that faithfully discover the true execution dependencies among the tasks in a process. To this end, our work offers a systematic approach to unveiling the true causal business process by leveraging an existing causal discovery algorithm over activity timings. In addition, this work delves into a set of conditions under which process mining discovery algorithms generate a model that is incongruent with the causal business process model, and shows how the latter can be methodically employed for a sound analysis of the process. Our methodology searches for such discrepancies between the two models in the context of three causal patterns, and derives a new view in which these inconsistencies are annotated over the mined process model. We demonstrate the methodology using two open process mining algorithms, the IBM Process Mining tool, and the LiNGAM causal discovery technique, applying it to a synthesized dataset and two open benchmark datasets.
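
    For a rough sense of how causal discovery over activity timings can work in practice, the sketch below pivots per-case activity completion times into a numeric matrix and runs DirectLiNGAM from the open-source lingam Python package. The event-log file name and column names are hypothetical; this is a minimal illustration of the general technique, not the paper's actual pipeline.

```python
# Minimal sketch: causal discovery over activity timings with DirectLiNGAM.
# Assumes a hypothetical event-log CSV with columns case_id, activity, complete_time.
import pandas as pd
import lingam  # pip install lingam

log = pd.read_csv("event_log.csv", parse_dates=["complete_time"])

# One row per case, one column per activity, holding the activity's completion
# time in seconds relative to the start of the case.
case_start = log.groupby("case_id")["complete_time"].transform("min")
log["elapsed_s"] = (log["complete_time"] - case_start).dt.total_seconds()
timing = log.pivot_table(index="case_id", columns="activity",
                         values="elapsed_s", aggfunc="first").dropna()

# Estimate causal execution dependencies over the activity timings.
model = lingam.DirectLiNGAM()
model.fit(timing.values)

adjacency = pd.DataFrame(model.adjacency_matrix_,
                         index=timing.columns, columns=timing.columns)
print(adjacency.round(2))  # a nonzero entry [i, j] suggests activity j influences activity i
```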

    Extending Event-Driven Architecture for Proactive Systems

    Proactive event-driven computing is a new paradigm in which a decision is made neither in response to explicit user requests nor as a reaction to past events; rather, it is triggered autonomously by forecasting future states. Proactive event-driven computing therefore requires a departure from current event-driven architectures toward ones capable of handling uncertainty, future events, and real-time decision making. We present an architecture for Scalable Proactive Event-Driven Decision-making (SPEEDD) that combines these capabilities. The proposed architecture is composed of three main components: complex event processing, real-time decision making, and visualization. The architecture is instantiated by a real use case from the traffic management domain. In the future, the results of actual implementations of this use case will help us revise and refine the proposed architecture.
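
    The sketch below illustrates, in simplified form, how the three components named above (complex event processing, real-time decision making, visualization) could be wired into a proactive loop that acts on forecast events rather than past ones. All names, thresholds, and the congestion scenario are hypothetical and are not taken from the SPEEDD implementation.

```python
# Minimal sketch of a proactive event-driven loop with the three components
# described above (CEP, real-time decision making, visualization).
# All names, thresholds, and the congestion scenario are hypothetical.
from dataclasses import dataclass

@dataclass
class ForecastEvent:
    name: str           # e.g. "predicted_congestion"
    probability: float  # uncertainty attached to the forecast
    horizon_s: int      # how far in the future the state is expected

def complex_event_processing(sensor_readings):
    """Stand-in for a CEP engine: derive forecast events from raw readings."""
    density = sum(sensor_readings) / len(sensor_readings)
    if density > 0.7:
        yield ForecastEvent("predicted_congestion",
                            probability=min(density, 0.99), horizon_s=600)

def decide(event):
    """Real-time decision making: act before the forecast state materializes."""
    if event.probability > 0.8:
        return f"lower speed limit now ({event.name} expected in {event.horizon_s}s)"
    return "keep monitoring"

def visualize(event, action):
    """Stand-in for the visualization/dashboard component."""
    print(f"[dashboard] {event.name} p={event.probability:.2f} -> {action}")

for event in complex_event_processing([0.9, 0.85, 0.8]):
    visualize(event, decide(event))
```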

    The Boost 4.0 Experience

    In the last few years, the potential impact of big data on the manufacturing industry has received enormous attention. This chapter details two large-scale trials implemented in the context of the lighthouse project Boost 4.0. It introduces the Boost 4.0 Reference Model, which adapts the more generic BDVA big data reference architectures to the needs of Industry 4.0; the reference model comprises a reference architecture for the design and implementation of advanced big data pipelines and the digital factory service development reference architecture. The engineering and management of business-network track-and-trace processes in high-end textile supply are then explored, with a focus on assuring Preferential Certification of Origin (PCO). Finally, the main findings from these two large-scale piloting activities in the area of service engineering are discussed.

    A Framework for Verifiable and Auditable Collaborative Anomaly Detection

    Collaborative and Federated Learning are emerging approaches to managing cooperation among a group of agents on Machine Learning tasks, with the goal of improving each agent's performance without disclosing any data. In this paper we present a novel algorithmic architecture that tackles this problem in the particular case of Anomaly Detection (the classification of rare events), a setting in which typical applications often involve sensitive data, yet the scarcity of anomalous examples encourages collaboration. We show how Random Forests can be used as a tool for developing accurate classifiers with an effective insight-sharing mechanism that does not break data integrity. Moreover, we explain how the new architecture can be readily integrated into a blockchain infrastructure to ensure verifiable and auditable execution of the algorithm. Furthermore, we discuss how this work may set the basis for a more general approach to the design of collaborative ensemble-learning methods beyond the specific task and architecture discussed in this paper.
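
    The sketch below illustrates the general insight-sharing idea with scikit-learn: each agent trains a random forest on its own (here synthetic) data and shares only the fitted model, and predictions are made by pooling the shared forests. It is a minimal, assumption-laden illustration and omits the paper's protocol details and blockchain layer entirely.

```python
# Minimal sketch: agents train Random Forests locally and share only the fitted
# models, which are pooled into one voting ensemble. Synthetic data; the paper's
# actual protocol and its blockchain layer are omitted.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Imbalanced synthetic data split across three agents (stand-in for private logs).
X, y = make_classification(n_samples=3000, n_features=10,
                           weights=[0.97, 0.03], random_state=0)
parts = np.array_split(np.arange(len(y)), 3)

local_forests = []
for idx in parts:
    rf = RandomForestClassifier(n_estimators=50, class_weight="balanced",
                                random_state=0)
    rf.fit(X[idx], y[idx])     # raw data never leaves the agent
    local_forests.append(rf)   # only the fitted forest is shared

def ensemble_predict(forests, X_new):
    """Average the anomaly probabilities of all shared forests and threshold."""
    probs = np.mean([f.predict_proba(X_new)[:, 1] for f in forests], axis=0)
    return (probs > 0.5).astype(int)

print("flagged anomalies:", int(ensemble_predict(local_forests, X).sum()))
```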

    Reactive Rules Inference from Dynamic Dependency Models

    Defining dependency models is sometimes an easier and more intuitive way to represent an ontology than defining reactive rules directly, as it provides a higher level of abstraction. We briefly introduce the capabilities of the ADI (Active Dependency Integration) model, emphasizing new developments: 1. support for the automatic instantiation of dependencies from an abstract definition that expresses a general dependency in the ontology, namely a "template"
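
    To make the notion of a template concrete, the sketch below shows one way an abstract dependency over ontology classes could be expanded into concrete event-condition-action rules, one per matching pair of instances. The ontology, rule syntax, and names are all hypothetical and do not reflect the actual ADI model.

```python
# Minimal sketch of the "template" idea: one abstract dependency over ontology
# classes is instantiated into concrete event-condition-action rules for each
# matching pair of instances. Names and rule syntax are hypothetical, not ADI's.
template = {
    "source_class": "Warehouse",
    "target_class": "Store",
    "relation": "supplies",
    "rule": "ON {src}.stock_low IF {src} supplies {tgt} THEN notify({tgt})",
}

ontology = {
    "Warehouse": ["wh_north", "wh_south"],
    "Store": ["store_1", "store_2"],
    "supplies": [("wh_north", "store_1"), ("wh_south", "store_2")],
}

def instantiate(template, ontology):
    """Expand an abstract dependency template into concrete reactive rules."""
    rules = []
    for src, tgt in ontology[template["relation"]]:
        if (src in ontology[template["source_class"]]
                and tgt in ontology[template["target_class"]]):
            rules.append(template["rule"].format(src=src, tgt=tgt))
    return rules

for rule in instantiate(template, ontology):
    print(rule)
```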

    FERARI: a prototype for complex event processing over streaming multi-cloud platforms

    In this demo, we present FERARI, a prototype that enables real-time Complex Event Processing (CEP) for large-volume event data streams over distributed topologies. Our prototype constitutes, to our knowledge, the first complete, multi-cloud, end-to-end CEP solution, incorporating: (a) a user-friendly, web-based query authoring tool, (b) a powerful CEP engine implemented on top of a streaming cloud platform, (c) a CEP optimizer that chooses the best query execution plan with respect to low latency and/or reduced inter-cloud communication burden, and (d) a query analytics dashboard encompassing graph and map visualization tools that gives final stakeholders a holistic picture of the detected complex events. As a proof of concept, we apply FERARI to mobile fraud detection over real, properly anonymized telecommunication data from the T-Hrvatski Telekom network in Croatia.
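
    As an illustration of the kind of pattern such a CEP engine evaluates, the sketch below flags a caller who places many short international calls within a sliding time window, a simple stand-in for a mobile-fraud query. The thresholds, field names, and logic are hypothetical and are not FERARI code.

```python
# Illustrative CEP-style fraud pattern: flag a caller who places more than
# MAX_SHORT_CALLS short international calls within a sliding time window.
# Thresholds and field names are hypothetical; this is not FERARI code.
from collections import defaultdict, deque

WINDOW_S, MAX_SHORT_CALLS = 3600, 5
recent = defaultdict(deque)  # caller -> timestamps of recent short international calls

def on_call_event(caller, ts, duration_s, international):
    """Process one call record; return a complex 'fraud suspicion' event or None."""
    if not (international and duration_s < 30):
        return None
    window = recent[caller]
    window.append(ts)
    while window and ts - window[0] > WINDOW_S:  # slide the window
        window.popleft()
    if len(window) > MAX_SHORT_CALLS:
        return {"type": "fraud_suspicion", "caller": caller, "calls": len(window)}
    return None

# A toy stream of call events for one caller.
for i in range(8):
    alert = on_call_event("caller_42", ts=1000 + 60 * i, duration_s=10, international=True)
    if alert:
        print(alert)
```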

    Demo: Complex event processing over streaming multi-cloud platforms - The FERARI approach

    We present FERARI, a prototype for processing voluminous event streams over multi-cloud platforms. At its core, FERARI both exploits the potential for in-situ (intra-cloud) processing and orchestrates inter-cloud complex event detection in a communication-efficient way. At the application level, it includes a user-friendly query authoring tool and an analytics dashboard providing granular reports about detected events. In that respect, FERARI constitutes, to our knowledge, the first complete end-to-end solution of its kind. In this demo, we apply the FERARI approach to a real scenario from the telecommunication domain.
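
    The sketch below illustrates the communication-saving intuition in miniature: each cloud filters its own stream locally and forwards only partial matches, and a coordinator completes the cross-cloud complex event from those few messages. The sites, events, and pattern are hypothetical and do not represent FERARI's optimizer or engine.

```python
# Minimal sketch: each cloud filters its own stream in situ and forwards only
# partial matches; the coordinator detects the cross-cloud pattern (here, the
# same user failing to log in at two sites). Hypothetical, not FERARI's engine.
def local_site(events, predicate):
    """Intra-cloud step: keep only events that can contribute to the pattern."""
    return [e for e in events if predicate(e)]

def coordinator(partials_by_site):
    """Inter-cloud step: raise a complex event when a user appears at >1 site."""
    seen, alerts = {}, []
    for site, matches in partials_by_site.items():
        for e in matches:
            if e["user"] in seen and seen[e["user"]] != site:
                alerts.append({"type": "cross_cloud_login_fail", "user": e["user"]})
            seen[e["user"]] = site
    return alerts

site_a = [{"type": "login_fail", "user": "u1"}, {"type": "ok", "user": "u2"}]
site_b = [{"type": "login_fail", "user": "u1"}]

partials = {
    "cloud_a": local_site(site_a, lambda e: e["type"] == "login_fail"),
    "cloud_b": local_site(site_b, lambda e: e["type"] == "login_fail"),
}
print(coordinator(partials))  # only 2 of the 3 raw events crossed a cloud boundary
```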